Assignment: DT

Task - 1

  1. Apply the Decision Tree Classifier (DecisionTreeClassifier) on these feature sets:
    • Set 1: categorical and numerical features + preprocessed_essay (TFIDF) + sentiment scores of preprocessed_essay
    • Set 2: categorical and numerical features + preprocessed_essay (TFIDF W2V) + sentiment scores of preprocessed_essay
    • Hyperparameter tuning (find the best `max_depth` in the range [1, 5, 10, 50], and the best `min_samples_split` in the range [5, 10, 100, 500])
      • Find the hyperparameter combination that gives the maximum AUC value
      • Find the best hyperparameters using k-fold cross validation (GridSearchCV or RandomizedSearchCV) or simple cross-validation data (you can write your own for loops; refer to the sample solution)
      • Representation of results
        • Plot the performance of the model on both the train data and the cross-validation data for each hyperparameter combination, as shown in the figure: a 3D scatter plot with min_samples_split on the X-axis, max_depth on the Y-axis, and AUC score on the Z-axis. The notebook 3d_scatter_plot.ipynb, available in the same drive, explains how to create this 3D plot.
        • or
        • Plot the performance of the model on both the train data and the cross-validation data for each hyperparameter combination as a seaborn heat map, with min_samples_split as rows, max_depth as columns, and the AUC score as the values inside the cells.
        • Choose either of the two plotting techniques: the 3D plot or the heat map.
        • Once you have found the best hyperparameters, train your model with them, find the AUC on the test data, and plot the ROC curve on both train and test.
        • Along with plotting the ROC curve, print the confusion matrix of predicted vs. original labels of the test data points.
        • After plotting the confusion matrix on the test data, collect all the `false positive data points`:
          • Plot a WordCloud (https://www.geeksforgeeks.org/generating-word-cloud-python/) of the words in the essay text of these `false positive data points`
          • Plot a box plot of the `price` of these `false positive data points`
          • Plot the PDF of `teacher_number_of_previously_posted_projects` of these `false positive data points`
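The tuning step above can be sketched with GridSearchCV. This is a minimal illustration on toy data generated with make_classification; `X_train`/`y_train` stand in for your vectorized Set 1 or Set 2 features, and the grid matches the ranges given in the assignment:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the vectorized train features and labels
X_train, y_train = make_classification(n_samples=500, n_features=20,
                                       random_state=42)

param_grid = {"max_depth": [1, 5, 10, 50],
              "min_samples_split": [5, 10, 100, 500]}

# scoring="roc_auc" makes GridSearchCV pick the combination with maximum AUC;
# return_train_score=True keeps the train-AUC needed for the 3D plot / heat map
search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                      param_grid, scoring="roc_auc", cv=3,
                      return_train_score=True)
search.fit(X_train, y_train)

print(search.best_params_)  # best (max_depth, min_samples_split) pair
print(search.best_score_)   # mean cross-validated AUC for that pair
```

`search.cv_results_` holds the mean train and cross-validation AUC for every grid point, which you can pivot into the min_samples_split × max_depth table used by either plotting technique.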

Apply the Decision Tree Classifier (DecisionTreeClassifier) on feature Set 1

1.2 Splitting data into train and cross-validation (or test) sets: stratified sampling
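A minimal sketch of stratified splitting with sklearn's train_test_split; the label array here is a toy stand-in for project_is_approved-style imbalanced labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)   # toy feature column
y = np.array([0] * 200 + [1] * 800)  # imbalanced labels (20% / 80%)

# stratify=y preserves the class ratio in every split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
X_tr, X_cv, y_tr, y_cv = train_test_split(
    X_train, y_train, test_size=0.3, stratify=y_train, random_state=42)

print(y_tr.mean(), y_cv.mean(), y_test.mean())  # all ≈ 0.8
```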

1.3 Make Data Model Ready: encoding essays with the TFIDF vectorizer
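A sketch of the TFIDF step on toy essays. The key point is to fit the vectorizer on the train essays only, then reuse that vocabulary to transform cross-validation/test essays:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_essays = ["students need books",
                "my students love science",
                "books for the classroom"]
test_essays = ["science books for students"]

vectorizer = TfidfVectorizer(min_df=1)
X_train_tfidf = vectorizer.fit_transform(train_essays)  # fit on train only
X_test_tfidf = vectorizer.transform(test_essays)        # reuse train vocabulary

print(X_train_tfidf.shape, X_test_tfidf.shape)
```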

1.4 Make Data Model Ready: encoding numerical, categorical features

1.4.1 encoding categorical features: School State

1.4.2 encoding categorical features: teacher_prefix

1.4.3 encoding categorical features: project_grade_category

1.4.4 encoding categorical features: project_subject_categories

1.4.5 encoding categorical features: project_subject_subcategories

1.4.6 encoding numerical features: Price
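The categorical and numerical encoding steps above can be sketched as follows, with toy values standing in for a column such as teacher_prefix and for price. CountVectorizer(binary=True) is one common way to one-hot a categorical column; StandardScaler is used here for the numerical column (any comparable scaler works):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import StandardScaler

# One-hot encode a categorical column (toy teacher_prefix values)
train_prefix = ["mrs", "mr", "ms", "mrs"]
vec = CountVectorizer(binary=True)       # one column per category
prefix_ohe = vec.fit_transform(train_prefix)

# Scale a numerical column (toy price values); fit on train only
train_price = np.array([[154.60], [299.00], [516.85], [232.90]])
scaler = StandardScaler()
price_scaled = scaler.fit_transform(train_price)

print(prefix_ohe.shape, price_scaled.shape)
```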

1.4.7 Concatenating all the features
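Since the TFIDF block is sparse, scipy.sparse.hstack is the usual way to stitch all feature blocks together; the blocks below are random stand-ins with realistic shapes:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack

rng = np.random.RandomState(0)
essay_tfidf = csr_matrix(rng.rand(4, 10))  # stand-in TFIDF block
cat_ohe = csr_matrix(np.eye(4))            # stand-in one-hot block
price = csr_matrix(rng.rand(4, 1))         # stand-in scaled numerical column

# Horizontally stack all blocks into one train matrix
X = hstack((essay_tfidf, cat_ohe, price)).tocsr()
print(X.shape)  # (4, 15)
```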

Apply the Decision Tree Classifier (DecisionTreeClassifier) on feature Set 2

1.3 Make Data Model Ready: encoding essays with TFIDF-weighted W2V

1.4.7 Concatenating all the features

Task - 2

For this task, consider the Set 1 features.

Hint for calculating Sentiment scores

Summary